Creators/Authors contains: "Li Shen"


  1. The latest developments in vehicle-to-infrastructure (V2I) and vehicle-to-everything (V2X) technologies enable all entities in the transportation system to communicate and collaborate to optimize transportation safety, mobility, and equity at the system level. At the same time, the community of researchers and developers is becoming aware of the critical role of roadway infrastructure in realizing automated driving. In particular, intelligent infrastructure systems, which leverage modern sensors, artificial intelligence, and communication capabilities, can provide critical information and control support to connected and/or automated vehicles, fulfilling functions that are infeasible for automated vehicles alone due to technical or cost considerations. However, there is limited research on formulating and standardizing the intelligence levels of road infrastructure to facilitate such development, as the SAE automated driving levels have done for automated vehicles. This article proposes a five-level intelligence definition for intelligent roadway infrastructure, termed the connected and automated highway (CAH). The CAH is a subsystem of the larger collaborative automated driving system (CADS), alongside the connected automated vehicle (CAV) subsystem. Leveraging the intelligence definition of the CAH, an intelligence definition for the CADS is also derived. Examples of how the CAH at different levels operates with the CAV in the CADS are introduced to demonstrate the dynamic allocation of various automated driving tasks between different entities in the CADS.
     
    Free, publicly-accessible full text available December 31, 2024
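    The dynamic task allocation the abstract describes can be illustrated with a minimal sketch. The level names, thresholds, and allocation rule below are illustrative assumptions, not the article's actual definitions; the article's five CAH levels are not enumerated in the abstract.

    ```python
    # Illustrative sketch only: assumes CAH and CAV intelligence are each
    # expressed as an integer level, and that a driving task is led by
    # whichever subsystem is more capable. The rule is a placeholder, not
    # the article's definition.

    def allocate_task(task: str, cah_level: int, cav_level: int) -> str:
        """Assign a driving task to the more capable subsystem in the CADS."""
        if cah_level > cav_level:
            return "CAH"  # intelligent infrastructure leads; vehicle consumes its output
        if cav_level > cah_level:
            return "CAV"  # vehicle is self-sufficient; infrastructure assists
        return "shared"   # comparable capability: cooperative execution

    # A highly intelligent highway can take over perception for a basic CAV.
    print(allocate_task("perception", cah_level=4, cav_level=2))  # CAH
    print(allocate_task("perception", cah_level=1, cav_level=3))  # CAV
    ```
    
    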
  2.
    The size of Transformer models is growing at an unprecedented rate. It has taken less than one year to reach trillion-level parameters since the release of GPT-3 (175B). Training such models requires both substantial engineering efforts and enormous computing resources, which are luxuries most research teams cannot afford. In this paper, we propose PipeTransformer, which leverages automated elastic pipelining for efficient distributed training of Transformer models. In PipeTransformer, we design an adaptive on-the-fly freeze algorithm that can identify and freeze some layers gradually during training, and an elastic pipelining system that can dynamically allocate resources to train the remaining active layers. More specifically, PipeTransformer automatically excludes frozen layers from the pipeline, packs active layers into fewer GPUs, and forks more replicas to increase data-parallel width. We evaluate PipeTransformer using Vision Transformer (ViT) on ImageNet and BERT on SQuAD and GLUE datasets. Our results show that compared to the state-of-the-art baseline, PipeTransformer attains up to a 2.83-fold speedup without losing accuracy. We also provide various performance analyses for a more comprehensive understanding of our algorithmic and system-wise design. Finally, we have modularized our training system with flexible APIs and made the source code publicly available at https://DistML.ai.
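    The elastic repacking idea in the PipeTransformer abstract (exclude frozen layers, shrink the pipeline onto fewer GPUs, and use the freed GPUs for extra data-parallel replicas) can be sketched with plain arithmetic. The function name, the fixed layers-per-GPU packing policy, and the parameters are assumptions for illustration, not the paper's actual implementation.

    ```python
    # Hypothetical sketch of PipeTransformer-style elastic repacking.
    # Assumption: each pipeline GPU hosts a fixed number of active layers.

    def repack(num_layers: int, frozen_count: int, total_gpus: int,
               layers_per_gpu: int = 4) -> tuple[int, int]:
        """Return (pipeline_gpus, replicas) after dropping frozen layers.

        Frozen layers leave the pipeline, so fewer GPUs are needed per
        pipeline; the freed GPUs become additional data-parallel replicas.
        """
        active = num_layers - frozen_count
        # Pipeline depth: ceil(active / layers_per_gpu), at least 1 GPU.
        pipeline_gpus = max(1, -(-active // layers_per_gpu))
        # Data-parallel width: how many pipeline copies fit in the cluster.
        replicas = max(1, total_gpus // pipeline_gpus)
        return pipeline_gpus, replicas

    # As training freezes layers, the pipeline shrinks and width grows.
    print(repack(num_layers=24, frozen_count=0, total_gpus=8))   # (6, 1)
    print(repack(num_layers=24, frozen_count=16, total_gpus=8))  # (2, 4)
    ```

    The key design point the abstract highlights is that both transformations happen automatically during training, so throughput increases as more layers converge and freeze.
    
    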
  3.